DiffNodesets: An Efficient Structure for Fast Mining Frequent Itemsets
Mining frequent itemsets is an essential problem in data mining and plays an
important role in many data mining applications. In recent years, several
itemset representations based on node sets have been proposed and have been
shown to be very efficient for mining frequent itemsets. In this paper, we
propose DiffNodeset, a novel and more efficient itemset representation for
mining frequent itemsets. Based on the DiffNodeset structure, we present an
efficient algorithm, named dFIN, for mining frequent itemsets. To achieve
high efficiency, dFIN finds frequent itemsets using a set-enumeration tree
with a hybrid search strategy and, in some cases, directly enumerates
frequent itemsets without candidate generation. To evaluate the performance
of dFIN, we have conducted extensive experiments comparing it with existing
leading algorithms on a variety of real and synthetic datasets. The
experimental results show that dFIN is significantly faster than these
leading algorithms.
Comment: 22 pages, 13 figures
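The abstract does not spell out the structure, but DiffNodeset follows the
diffset line of work: an itemset is represented by the difference between two
sorted node lists of a prefix tree, so the support of an extension can be
obtained by subtraction rather than intersection. The sketch below is a
minimal illustration of that idea, not the paper's implementation; the
(node_id, count) encoding and the names diff_nodeset and support_from_diff
are assumptions made for the example.

```python
def diff_nodeset(nodeset_a, nodeset_b):
    """Nodes present in nodeset_a but absent from nodeset_b.

    Both inputs are lists of (node_id, count) pairs sorted by node_id,
    so a single linear merge computes the difference in O(|a| + |b|).
    """
    result = []
    i = j = 0
    while i < len(nodeset_a):
        if j >= len(nodeset_b) or nodeset_a[i][0] < nodeset_b[j][0]:
            result.append(nodeset_a[i])  # node occurs only in nodeset_a
            i += 1
        elif nodeset_a[i][0] > nodeset_b[j][0]:
            j += 1                       # node occurs only in nodeset_b
        else:
            i += 1                       # node occurs in both; drop it
            j += 1
    return result


def support_from_diff(parent_support, diff):
    # Diffset-style counting: the extended itemset keeps every
    # transaction of its parent except those lost in the difference.
    return parent_support - sum(count for _, count in diff)
```

Because only differences are stored, the lists tend to shrink as itemsets
grow, which is the usual source of speedups in diffset-based miners.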
Revisiting the Parameter Efficiency of Adapters from the Perspective of Precision Redundancy
Current state-of-the-art results in computer vision depend in part on
fine-tuning large pre-trained vision models. However, with the exponential
growth of model sizes, conventional full fine-tuning, which needs to store
an individual network copy for each task, leads to increasingly large
storage and transmission overhead. Adapter-based Parameter-Efficient Tuning
(PET)
methods address this challenge by tuning lightweight adapters inserted into the
frozen pre-trained models. In this paper, we investigate how to make adapters
even more efficient, reaching a new minimum size required to store a
task-specific fine-tuned network. Inspired by the observation that the
parameters of adapters converge at flat local minima, we find that adapters are
resistant to noise in parameter space, which means they are also resistant to
low numerical precision. To train low-precision adapters, we propose a
computationally efficient quantization method that minimizes the quantization
error. Through extensive experiments, we find that low-precision adapters
exhibit minimal performance degradation, and even 1-bit precision is sufficient
for adapters. The experimental results demonstrate that 1-bit adapters
outperform all other PET methods on both the VTAB-1K benchmark and few-shot
FGVC tasks, while requiring the smallest storage size. Our findings show, for
the first time, the significant potential of quantization techniques in PET,
providing a general solution to enhance the parameter efficiency of
adapter-based PET methods. Code: https://github.com/JieShibo/PETL-ViT
Comment: Accepted to ICCV 2023
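For intuition on error-minimizing 1-bit quantization, the sketch below
binarizes a weight tensor with a per-tensor scale: for codes b in {-1, +1},
the scale alpha minimizing ||W - alpha * b||_2 is the mean absolute value of
W (the classic XNOR-Net-style result). Whether this matches the paper's
exact quantization scheme is an assumption, and the function names binarize
and dequantize are hypothetical.

```python
import numpy as np

def binarize(weights: np.ndarray):
    """Quantize a full-precision adapter tensor to 1-bit codes + a scale."""
    signs = np.sign(weights)          # 1-bit codes in {-1, +1}
    signs[signs == 0] = 1.0           # break ties at zero deterministically
    alpha = np.abs(weights).mean()    # L2-optimal per-tensor scale
    return signs.astype(np.int8), np.float32(alpha)

def dequantize(signs: np.ndarray, alpha: np.float32) -> np.ndarray:
    # Reconstruct an approximation of the full-precision weights.
    return alpha * signs.astype(np.float32)
```

Under this scheme a task only needs to store one bit per adapter parameter
plus a single scalar per tensor, which is consistent with the storage
savings the abstract reports for 1-bit adapters.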